    Extending FAIR to FAIREr: Cognitive Interoperability and the Human Explorability of Data and Metadata

    Making data and metadata FAIR (Findable, Accessible, Interoperable, Reusable) has become an important objective in research and industry, and knowledge graphs and ontologies have been cornerstones of many going-FAIR strategies. In this process, however, the human-actionability of data and metadata has been overlooked. Here, in the first part, I discuss two issues exemplifying the lack of human-actionability in knowledge graphs and suggest adding the Principle of human Explorability, extending FAIR to the FAIREr Guiding Principles. Moreover, in its interoperability framework and as part of its GoingFAIR strategy, the European Open Science Cloud initiative distinguishes between technical, semantic, organizational, and legal interoperability, and I argue for adding cognitive interoperability. In the second part, I provide a short introduction to semantic units and discuss how they increase the human explorability and cognitive interoperability of knowledge graphs. Semantic units structure a knowledge graph into identifiable and semantically meaningful subgraphs, each represented by its own resource that instantiates a corresponding semantic unit class. Three categories of semantic units can be distinguished: statement units model individual propositions, compound units are semantically meaningful collections of semantic units, and question units model questions that translate into queries. I conclude by discussing how semantic units provide a framework for developing innovative user interfaces that support exploring and accessing information in the graph by reducing its complexity to what currently interests the user, thereby significantly increasing the cognitive interoperability and thus the human-actionability of knowledge graphs.
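The three semantic-unit categories described in the abstract can be illustrated with a minimal sketch. All class names, URIs, and the toy triple format below are invented for illustration; they are not the paper's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticUnit:
    uri: str  # each unit is its own identifiable resource in the graph

@dataclass
class StatementUnit(SemanticUnit):
    # one individual proposition, modeled as a small subgraph of triples
    triples: list = field(default_factory=list)

@dataclass
class CompoundUnit(SemanticUnit):
    # a semantically meaningful collection of other semantic units
    members: list = field(default_factory=list)

@dataclass
class QuestionUnit(SemanticUnit):
    # a question that translates into a query over the graph
    question: str = ""
    query: str = ""

weight = StatementUnit("ex:unit1",
                       triples=[("ex:mouse1", "ex:hasWeight", "25 g")])
observation = CompoundUnit("ex:unit2", members=[weight])
q = QuestionUnit("ex:unit3",
                 question="Which specimens weigh more than 20 g?",
                 query="SELECT ?s WHERE { ?s ex:hasWeight ?w . FILTER(?w > 20) }")
```

Because every unit carries its own URI, a user interface can show or hide whole subgraphs at the granularity of units rather than individual triples.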

    FAIR data representation in times of eScience: a comparison of instance-based and class-based semantic representations of empirical data using phenotype descriptions as example

    Background: The size, velocity, and heterogeneity of Big Data outstrip conventional data management tools and require data and metadata to be fully machine-actionable (i.e., eScience-compliant) and thus findable, accessible, interoperable, and reusable (FAIR). This can be achieved by using ontologies and by representing data as semantic graphs. Here, we discuss two different semantic graph approaches to representing empirical data and metadata in a knowledge graph, with phenotype descriptions as an example. Almost all phenotype descriptions are still published as unstructured natural language texts, with far-reaching consequences for their FAIRness that substantially impede their overall usability within the life sciences. However, with an increasing number of anatomy ontologies becoming available and semantic applications emerging, a solution to this problem is within reach. Researchers are starting to document and communicate phenotype descriptions through the Web in the form of highly formalized and structured semantic graphs that use ontology terms and Uniform Resource Identifiers (URIs) to circumvent the problems connected with unstructured texts. Results: Using phenotype descriptions as an example, we compare and evaluate two basic representations of empirical data and their accompanying metadata in the form of semantic graphs: the class-based TBox semantic graph approach called Semantic Phenotype and the instance-based ABox semantic graph approach called Phenotype Knowledge Graph. Their main difference is that only the ABox approach allows every individual part and property mentioned in the description to be identified in the knowledge graph. This technical difference has substantial practical consequences that significantly affect the overall usability of empirical data. The consequences affect the findability, accessibility, and explorability of empirical data as well as their comparability, expandability, universal usability and reusability, and overall machine-actionability. Moreover, TBox semantic graphs often require querying under entailment regimes, which is computationally more complex. Conclusions: We conclude that, from a conceptual point of view, the advantages of the instance-based ABox semantic graph approach outweigh both its shortcomings and the advantages of the class-based TBox semantic graph approach. Therefore, we recommend the instance-based ABox approach as a FAIR approach for documenting and communicating empirical data and metadata in a knowledge graph.
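The TBox/ABox contrast can be made concrete with toy triples. The prefixes, URIs, and the class expression below are invented for illustration and do not reproduce either paper's actual model; the point is only that the instance-based version gives every part its own addressable resource.

```python
# Class-based (TBox): the phenotype is folded into one class axiom,
# here a femur of a given length expressed as a class expression string.
tbox = [
    ("ex:SemanticPhenotype1", "rdfs:subClassOf",
     "femur and (hasQuality some (length and hasValue 10mm))"),
]

# Instance-based (ABox): the femur and its length quality each get their
# own URI, so they can be addressed, queried, and linked individually.
abox = [
    ("ex:femur1",  "rdf:type",      "uberon:femur"),
    ("ex:length1", "rdf:type",      "pato:length"),
    ("ex:femur1",  "ex:hasQuality", "ex:length1"),
    ("ex:length1", "ex:hasValue",   "10 mm"),
]

# Only the ABox version lets a plain triple-pattern query find the
# individual anatomical part directly, without entailment reasoning:
parts = [s for s, p, o in abox
         if p == "rdf:type" and o.startswith("uberon:")]
```

In the TBox version the same question requires unpacking the class expression under an entailment regime, which is the computational cost the abstract refers to.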

    Promoting responsibility for one's own health: exercise programmes strengthen the long-term unemployed over 50

    Sick from too little exercise: Diseases such as obesity, type II diabetes mellitus, hypertension, degenerative joint diseases, osteoporosis, and back pain are, among other things, consequences of a sedentary lifestyle. The World Health Organization (WHO) estimates the resulting deaths in the European Union at about one million per year. The Robert Koch Institute has calculated that more than 6,500 cardiovascular deaths per year could be avoided in Germany if just half of the physically inactive men aged 40 to 69 engaged in moderate physical activity. A weekly minimum of 150 minutes of moderate exercise is recommended. This corresponds, for example, to brisk walking, cycling, or comparable activities that stimulate the cardiovascular system and respiratory function.

    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and the specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open-source knowledge graph framework that is based on knowledge graph building blocks (KGBBs). KGBBs are small information modules for knowledge processing, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, an editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graph without having to think about semantic modelling. We conclude by discussing the nine challenges and how the KGBB Framework provides solutions for the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the feasibility in principle of using semantic units and KGBBs to manage and structure knowledge graphs.
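The decoupling the abstract argues for can be sketched as separate adapters around one in-memory model. All class and method names below are hypothetical and illustrative; they are not the KGBB Framework's actual API.

```python
import json

class InMemoryGraph:
    """Internal in-memory data model: a plain list of triples."""
    def __init__(self):
        self.triples = []

    def add(self, s, p, o):
        self.triples.append((s, p, o))

class DisplayAdapter:
    """Data display model: renders the graph for humans."""
    def render(self, graph):
        return "\n".join(f"{s} {p} {o}" for s, p, o in graph.triples)

class ExportAdapter:
    """Data access/export model: serializes the graph for machines."""
    def export(self, graph):
        return json.dumps([list(t) for t in graph.triples])

g = InMemoryGraph()
g.add("ex:unit1", "rdf:type", "ex:StatementUnit")
text = DisplayAdapter().render(g)   # human-readable view
blob = ExportAdapter().export(g)    # machine-readable export
```

Because storage, display, and export only see the in-memory model through their own adapters, each can be swapped independently, which is the design property the abstract claims is essential for knowledge management systems.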

    Anatomy and the type concept in biology show that ontologies must be adapted to the diagnostic needs of research

    Background: In times of exponential data growth in the life sciences, machine-supported approaches are becoming increasingly important, and with them the need for FAIR (Findable, Accessible, Interoperable, Reusable) and eScience-compliant data and metadata standards. Ontologies, with their queryable knowledge resources, play an essential role in providing these standards. Unfortunately, biomedical ontologies only provide ontological definitions that answer "What is it?" questions, but no method-dependent empirical recognition criteria that answer "How does it look?" questions. Consequently, biomedical ontologies contain knowledge of the underlying ontological nature of structural kinds, but often lack sufficient diagnostic knowledge to unambiguously determine the reference of a term. Results: We argue that this is because ontology terms are usually textually defined and conceived as essentialistic classes, while recognition criteria often require perception-based definitions, because perception-based contents more efficiently document and communicate spatial and temporal information: a picture is worth a thousand words. Therefore, diagnostic knowledge must often be conceived in terms of cluster classes or fuzzy sets. Using several examples from anatomy, we point out the importance of diagnostic knowledge in anatomical research and discuss the role of cluster classes and fuzzy sets as grouping concepts needed in anatomy ontologies in addition to essentialistic classes. In this context, we evaluate the role of the biological type concept and discuss its function as a general container concept for groupings not covered by the essentialistic class concept. Conclusions: We conclude that many recognition criteria can be conceptualized as text-based cluster classes that use terms that are in turn based on perception-based fuzzy set concepts. Finally, we point out that only if biomedical ontologies also model relevant diagnostic knowledge in addition to ontological knowledge will they fully realize their potential and contribute even more substantially to the establishment of FAIR and eScience-compliant data and metadata standards in the life sciences.
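The contrast between the three grouping concepts can be shown with toy membership functions. The features, scores, and thresholds below are invented for the sketch and are not taken from the paper.

```python
def essentialistic_class(has_defining_property: bool) -> bool:
    # crisp membership: a specimen either has the defining property or not
    return has_defining_property

def fuzzy_membership(similarity: float) -> float:
    # graded membership in [0, 1], e.g. how closely an observed structure
    # resembles a perception-based prototype
    return max(0.0, min(1.0, similarity))

def cluster_class(criterion_scores, threshold=0.5):
    # membership via a majority of sufficiently met diagnostic criteria;
    # no single criterion is individually necessary
    met = sum(score >= threshold for score in criterion_scores)
    return met / len(criterion_scores) >= 0.5

typicality = fuzzy_membership(0.8)           # a fairly typical specimen
is_member = cluster_class([0.9, 0.2, 0.7])   # 2 of 3 criteria met
```

An essentialistic class answers the "What is it?" question with a yes/no; the fuzzy and cluster variants capture the graded, multi-criterion character of the "How does it look?" recognition knowledge the abstract says ontologies currently lack.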

    Modeling J/psi production and absorption in a microscopic nonequilibrium approach

    Charmonium production and absorption in heavy-ion collisions is studied with the Ultrarelativistic Quantum Molecular Dynamics model. We compare the scenario of universal, time-independent color-octet dissociation cross sections with one of distinct color-singlet J/psi, psi', and chi_c states evolving from small, color-transparent configurations to their asymptotic sizes. The measured J/psi production cross sections in pA and AB collisions at SPS energies are consistent with both purely hadronic scenarios. The predicted rapidity dependence of J/psi suppression can be used to discriminate between the two experimentally. The importance of interactions with secondary hadrons and the applicability of thermal reaction kinetics to J/psi absorption are investigated. We discuss the effect of nuclear stopping and the role of leading hadrons. The dependence of the psi'/J/psi ratio on the model assumptions and the possible influence of refeeding processes is also studied.

    Dissociation rates of J/psi's with comoving mesons: thermal versus nonequilibrium scenario

    We study J/psi dissociation processes in hadronic environments. The validity of a thermal meson gas ansatz is tested by confronting it with an alternative, nonequilibrium scenario. Heavy-ion collisions are simulated in the framework of the microscopic transport model UrQMD, taking into account the production of charmonium states through hard parton-parton interactions and their subsequent rescattering with hadrons. The thermal gas and microscopic transport scenarios are shown to be very dissimilar. Estimates of J/psi survival probabilities based on thermal models of comover interactions in heavy-ion collisions are therefore not reliable.

    Contact resistance and overlapping capacitance in flexible sub-micron long oxide thin-film transistors for above 100 MHz operation

    In recent years, new forms of electronic devices such as electronic paper, flexible displays, epidermal sensors, and smart textiles have become reality. Thin-film transistors (TFTs) are the basic building blocks of the circuits used in such devices and need to operate above 100 MHz to efficiently process signals in RF systems and address pixels in high-resolution displays. Beyond the choice of the semiconductor, i.e., silicon, graphene, organics, or amorphous oxides, the junctionless nature of TFTs and their geometry imply limitations that become evident and important in devices with scaled channel lengths. Furthermore, the mechanical instability of flexible substrates limits the feature size of flexible TFTs. Contact resistance and overlapping capacitance are two parasitic effects that limit the transit frequency of transistors. They are often considered independent, but a deeper analysis of TFT geometry requires treating them together; in fact, they both depend on the overlapping length (LOV) between the source/drain and gate contacts. Here, we conduct a quantitative analysis based on a large number of flexible ultra-scaled IGZO TFTs. Devices with three different overlap lengths and channel lengths down to 0.5 μm are fabricated to experimentally investigate the scaling behavior of the transit frequency. Contact resistance and overlapping capacitance depend in opposite ways on LOV. These findings establish routes for optimizing the dimensions of the source/drain contact pads and suggest design guidelines for achieving megahertz operation in flexible IGZO TFTs and circuits.
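The trade-off on LOV can be sketched with the textbook transit-frequency relation f_T = g_m / (2*pi*(C_channel + 2*C_ov)) and a parallel-plate estimate of the overlap capacitance. All numerical values below are invented for illustration and are not the paper's measured device parameters.

```python
import math

def overlap_capacitance(eps_ox, width, l_ov, t_ox):
    """Parallel-plate estimate: C_ov grows linearly with overlap length."""
    return eps_ox * width * l_ov / t_ox

def transit_frequency(gm, c_channel, c_overlap):
    """f_T in Hz; source and drain each contribute one overlap capacitance."""
    return gm / (2 * math.pi * (c_channel + 2 * c_overlap))

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Illustrative numbers: 50 um wide device, 1 um overlap, 50 nm oxide
c_ov = overlap_capacitance(eps_ox=9 * EPS0, width=50e-6,
                           l_ov=1e-6, t_ox=50e-9)
f_t = transit_frequency(gm=1e-3, c_channel=50e-15, c_overlap=c_ov)
```

Shrinking l_ov lowers C_ov and raises f_T, but in a real device it also raises the contact resistance (and thus lowers g_m), which is why the abstract insists the two parasitics must be optimized together.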

    Toward Representing Research Contributions in Scholarly Knowledge Graphs Using Knowledge Graph Cells

    There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. Toward this end, in this work, we propose a novel semantic data model for modeling the contributions of scientific investigations. Our model, the Research Contribution Model (RCM), includes a schema of pertinent concepts highlighting six core information units, viz. Objective, Method, Activity, Agent, Material, and Result, on which a contribution hinges. It comprises bottom-up design considerations drawn from three scientific domains, viz. Medicine, Computer Science, and Agriculture, which we highlight as case studies. For its implementation in a knowledge graph application, we introduce the idea of building blocks called Knowledge Graph Cells (KGC), which provide the following characteristics: (1) they limit the expressivity of ontologies to what is relevant in a knowledge graph regarding specific concepts on the theme of research contributions; (2) they are expressible via ABox and TBox expressions; (3) they enforce a certain level of data consistency by ensuring that a uniform modeling scheme is followed through rules and input controls; (4) they organize the knowledge graph into named graphs; (5) they provide information for the front end to display the knowledge graph in a human-readable form, such as HTML pages; and (6) they can be seamlessly integrated into any existing publishing process that supports form-based input, abstracting its semantic technicalities, including RDF semantification, from the user. Thus, the RCM joins the trend of existing work toward enhanced digitalization of scholarly publications, enabled by RDF semantification as a knowledge graph, fostering the evolution of scholarly publications beyond written text.
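Characteristic (4), organizing the knowledge graph into named graphs with one graph per cell, can be sketched with a minimal quad store. The graph names, prefixes, and triples below are hypothetical examples, not the paper's implementation.

```python
from collections import defaultdict

# A toy quad store: graph name -> list of triples, i.e. RDF quads
quads = defaultdict(list)

def add(cell, s, p, o):
    """Record a triple inside the named graph of one Knowledge Graph Cell."""
    quads[cell].append((s, p, o))

# One named graph per KGC, here for the "Objective" information unit of
# a hypothetical contribution in the Agriculture case-study domain
add("kgc:objective_1", "ex:contrib1", "rcm:hasObjective", "ex:objective1")
add("kgc:objective_1", "ex:objective1", "rdfs:label", "predict crop yield")

# The cell boundary makes the subgraph addressable and retrievable as a unit
objective_graph = quads["kgc:objective_1"]
```

Addressing each cell as a named graph is what lets the front end render one information unit at a time (characteristic 5) and lets input forms write into exactly one cell (characteristic 6).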